
NeuSemSlice: Towards Effective DNN Model Maintenance via Neuron-level Semantic Slicing

Zhou, Shide, Li, Tianlin, Huang, Yihao, Shi, Ling, Wang, Kailong, Liu, Yang, Wang, Haoyu

arXiv.org Artificial Intelligence

Deep neural networks (DNNs), extensively applied across diverse disciplines, are characterized by integrated, monolithic architectures that set them apart from conventional software systems. This architectural difference introduces particular challenges for maintenance tasks such as model restructuring (e.g., model compression), re-adaptation (e.g., fitting new samples), and incremental development (e.g., continual knowledge accumulation). Prior research addresses these challenges by identifying task-critical neuron layers and dividing neural networks into semantically similar sequential modules. However, such layer-level approaches fail to precisely identify and manipulate neuron-level semantic components, restricting their applicability to finer-grained model maintenance tasks. In this work, we implement NeuSemSlice, a novel framework that introduces a semantic slicing technique to identify critical neuron-level semantic components in DNN models for semantic-aware maintenance tasks. Specifically, semantic slicing identifies, categorizes, and merges critical neurons across categories and layers according to their semantic similarity, enabling flexible and effective use in the subsequent tasks. Based on semantic slicing, we provide a series of novel strategies for semantic-aware model maintenance: preservation of semantic components (i.e., critical neurons) for model restructuring, critical-neuron tuning for model re-adaptation, and non-critical-neuron training for model incremental development. A thorough evaluation demonstrates that NeuSemSlice significantly outperforms baselines on all three tasks.
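The abstract describes semantic slicing as identifying critical neurons per category and merging them across categories by semantic similarity. A minimal sketch of that idea is below; it is not the paper's actual method. The scoring rule (mean absolute activation), the cosine-similarity merge criterion, the threshold, and all function names are illustrative assumptions.

```python
# Hedged sketch of neuron-level "semantic slicing" (not the paper's algorithm).
# Assumptions: per-class layer activations are precomputed; a neuron's
# criticality is its mean |activation| on that class's inputs; two class
# slices merge when their activation profiles are cosine-similar.
import numpy as np

def critical_neurons(acts: np.ndarray, top_k: int) -> np.ndarray:
    """Indices of the top_k neurons by mean |activation| for one class.

    acts: (n_samples, n_neurons) activations of one layer on one class.
    """
    scores = np.abs(acts).mean(axis=0)
    return np.argsort(scores)[-top_k:]

def should_merge(profile_a: np.ndarray, profile_b: np.ndarray,
                 threshold: float = 0.9) -> bool:
    """Merge two class slices when their neuron-activation profiles align."""
    cos = profile_a @ profile_b / (
        np.linalg.norm(profile_a) * np.linalg.norm(profile_b))
    return bool(cos >= threshold)

# Toy usage: two classes that strongly activate the same four neurons
# yield overlapping slices and a merge decision.
rng = np.random.default_rng(0)
acts_a = rng.normal(size=(64, 16)); acts_a[:, :4] += 5.0
acts_b = rng.normal(size=(64, 16)); acts_b[:, :4] += 5.0
slice_a = critical_neurons(acts_a, top_k=4)
slice_b = critical_neurons(acts_b, top_k=4)
merged = should_merge(np.abs(acts_a).mean(0), np.abs(acts_b).mean(0))
```

The merged slice would then back the three maintenance strategies: keep only slice neurons (restructuring), fine-tune only slice neurons (re-adaptation), or train only non-slice neurons (incremental development).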


Why hasn't AI delivered on its promise?

#artificialintelligence

Despite all this promise, adoption of AI is not where many expected (or hoped) it would be. Research continues to improve the underlying AI technology (the recent development of both Midjourney and Stable Diffusion is a case in point), and firms continue to invest in AI. We even saw a bump in investment during the first couple of years of the pandemic. However, a majority of AI projects fail. Compelling demonstrations are not transitioning into value-creating solutions. Autonomous cars are a prime example: commercial, mass-market versions constantly seem to be a decade away, despite early success and significant investment. We hear a similar story from AI practitioners working in firms attempting to leverage AI, with carefully developed models and solutions left on the bench because they are either not compelling enough or too fragile to replace existing solutions. There are notable successes, such as machine translation, but there appear to have been more misses.


WekaIO, Tesla and Hitachi Vantara – Blocks and Files

#artificialintelligence

WekaIO's President sees the company as the Tesla of storage suppliers, and says OEM Hitachi Vantara is making inroads into the Dell EMC Isilon customer base as Weka crosses the chasm between it and general enterprise use. WekaIO's scalable, parallel and high-performance filesystem software has made its name in high-performance computing and become popular in enterprises that have HPC use cases -- such as AI, machine learning, and genomics. It's now set to cross over into more general enterprise file workloads. BMW motorcycle-riding Jonathan Martin became WekaIO's President this month. He had previously been the Chief Marketing Officer at Hitachi Vantara, serving from March 2019 to May 2021.


The AGI Significance Paradox

#artificialintelligence

As progress accelerates towards AGI, the number of people who realize the significance of each new breakthrough decreases. This is the AGI Significance Paradox. There is a very old metaphor that you can boil a frog in water without it jumping out if you gradually increase the temperature. The fable goes that the frog lacks the internal models to recognize that the water temperature is changing. A cold-blooded creature like the frog is thought to have its temperature regulated only by its external environment.